Abstract:
Intrusion Detection Systems (IDSs) are pivotal for network security; while machine learning (ML)-based IDSs surpass traditional models in effectiveness, their growing complexity poses transparency challenges. This study uses the UNSW-NB15 dataset to train ML algorithms, aiming to demystify the complexity of IoT network intrusion detection. An Explainable Artificial Intelligence (XAI) framework is used to improve model comprehensibility and transparency. Scikit-Learn, ELI5 Permutation Importance, and Local Interpretable Model-Agnostic Explanations (LIME) are applied to analyze the performance of several ML algorithms. The study also investigates the influence of dataset balancing on the performance metrics of these algorithms: after balancing, SVM accuracy rose from 86% to 88%, while Random Forest and CatBoost accuracy climbed from 90% to 92%, and ensemble combinations likewise showed improved performance. ELI5 and LIME were then applied to the trained models to explain their predictions. The methodology presented in this paper offers a valuable toolkit for cybersecurity experts, empowering them to make informed decisions in the face of evolving cyber threats. The findings support the integration of XAI approaches with conventional ML systems to improve interpretability in cybersecurity applications. This study enhances IDSs for IoT networks by bridging the gap between ML-based prediction performance and the need for transparent and interpretable decision-making.
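The sketch below illustrates, in outline, the kind of workflow the abstract describes: a Scikit-Learn classifier explained globally with ELI5 permutation importance and locally with LIME. It is not the authors' exact pipeline; the synthetic data stands in for the preprocessed UNSW-NB15 features, and the feature names, split sizes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of an XAI workflow (assumed, not the paper's exact code).
# Synthetic data replaces the UNSW-NB15 feature matrix; names and settings are illustrative.
import eli5
from eli5.sklearn import PermutationImportance
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for preprocessed UNSW-NB15 records (binary label: normal vs. attack).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7, 0.3], random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# One of the evaluated classifiers (Random Forest) trained on the balanced/held-out split.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanation: ELI5 permutation importance on held-out data.
perm = PermutationImportance(model, random_state=0).fit(X_test, y_test)
print(eli5.format_as_text(eli5.explain_weights(perm, feature_names=feature_names)))

# Local explanation: LIME for a single test instance.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["normal", "attack"], mode="classification"
)
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
print(exp.as_list())
```

Permutation importance gives a model-level ranking of features, while LIME attributes a single prediction to the features that drove it; together they cover the global and local views of interpretability referenced above.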